Self-supervised Multi-task Representation Learning for Sequential Medical Images
Authors
Abstract
Self-supervised representation learning has achieved promising results for downstream visual tasks on natural images. However, its use in the medical domain, where there is an underlying anatomical structural similarity, remains underexplored. To address this shortcoming, we propose a self-supervised multi-task framework for sequential 2D medical images, which explicitly aims to exploit such structures via multiple pretext tasks. Unlike current state-of-the-art methods, which are designed to pre-train only the encoder with instance discrimination tasks, the proposed framework can pre-train the encoder and decoder at the same time for dense prediction tasks. We evaluate the representations extracted by our framework on two public whole heart segmentation datasets from different domains. The experimental results show that our method outperforms MoCo V2, a strong baseline. Given a small amount of labeled data, networks pre-trained on unlabeled data achieve better performance than their counterparts trained with standard supervised approaches.
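The instance-discrimination pretraining that the abstract contrasts itself with (MoCo V2) centers on the InfoNCE contrastive loss: a query embedding is pulled toward the embedding of another augmented view of the same image (the positive key) and pushed away from a queue of embeddings of other images (negative keys). The sketch below is a minimal NumPy illustration of that loss only, under our own assumptions; the function name `info_nce_loss` and all shapes are hypothetical and this is not the paper's or MoCo's actual code.

```python
import numpy as np

def info_nce_loss(q, k, queue, temperature=0.07):
    """InfoNCE loss for instance discrimination (illustrative sketch).

    q:     (d,)   query embedding of one augmented view
    k:     (d,)   positive key embedding (another view of the same image)
    queue: (N, d) negative key embeddings from a memory queue
    """
    # L2-normalize so dot products are cosine similarities
    q = q / np.linalg.norm(q)
    k = k / np.linalg.norm(k)
    queue = queue / np.linalg.norm(queue, axis=1, keepdims=True)

    # Logits: the positive pair first, then the N negatives
    l_pos = np.dot(q, k)
    l_neg = queue @ q
    logits = np.concatenate([[l_pos], l_neg]) / temperature

    # Cross-entropy with the positive at index 0
    logits -= logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]
```

Because only a single global embedding per image is contrasted, this objective pretrains the encoder alone; pretraining a decoder as well (as the framework above proposes) requires additional pixel-level pretext tasks on top of such an instance-level loss.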
Similar References
Multi-Task Learning for Sequential Data
The problem of multi-task learning (MTL) is considered for sequential data, such as that typically modeled via a hidden Markov model (HMM). A given task is composed of a set of sequential data, for which an HMM is to be learned, and MTL is employed to learn the multiple task-dependent HMMs jointly, through appropriate sharing of data. The HMM-MTL formulation is implemented in a Bayesian setting...
Multi-task Representation Learning for Demographic Prediction
Demographic attributes are important resources for market analysis, which are widely used to characterize different types of users. However, such signals are only available for a small fraction of users due to the difficulty in manual collection process by retailers. Most previous work on this problem explores different types of features and usually predicts different attributes independently. ...
Multi-task Multi-domain Representation Learning for Sequence Tagging
Many domain adaptation approaches rely on learning cross domain shared representations to transfer the knowledge learned in one domain to other domains. Traditional domain adaptation only considers adapting for one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario. We propose a neural network framework that supports domain adaptation for mul...
Self-Paced Multi-Task Learning
Multi-task learning is a paradigm, where multiple tasks are jointly learnt. Previous multi-task learning models usually treat all tasks and instances per task equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones in the cognitive process, in this paper, we propose a novel multi-task learning framework that attempts to learn the tasks by simultaneo...
Deblocking Joint Photographic Experts Group Compressed Images via Self-learning Sparse Representation
JPEG is one of the most widely used image compression methods, but it causes annoying blocking artifacts at low bit-rates. Sparse representation is an efficient technique which can solve many inverse problems in image processing applications such as denoising and deblocking. In this paper, a post-processing method is proposed for reducing JPEG blocking effects via sparse representation. In this ...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-86523-8_47